Group Project 1: Developing a Plan for Evaluation

This week, you and your group members will develop a plan for evaluation to be presented in your curriculum design proposal.

For the purposes of your curriculum design proposal, you will focus on developing a plan for summative evaluation. Consider the procedures for summative evaluation described in your learning resources. What might the goals of evaluation be for your group’s curriculum design? What should the indicators of success be, and what would be the best orientation and design for the evaluation?
By Monday, the Week 5 Facilitator should create a page in the group wiki designated for “Evaluation.”
By Tuesday, post to your group wiki a description of the evaluation plan you would recommend be implemented in conjunction with your curriculum design proposal. Include a description of the indicators of success, the orientation of the evaluation, and how the evaluation would be conducted in order to gather the necessary data.

Return to your group wiki and review the evaluation plans posted by your group members. How do your ideas compare with those of your colleagues? Do some ideas overlap? Do the ideas of your colleagues cause you to have a different perspective?
By Wednesday, each group member should post his or her suggestion for the group’s final plan for evaluation. This suggestion should reflect a combination of what you consider to be the best ideas from all group members.

It is the responsibility of the Week 5 Facilitator to review each group member’s suggestion and create a final plan for evaluation that reflects the majority opinion.
By Thursday, the Week 5 Facilitator should post the final plan for evaluation to the group wiki. The plan should include a description of the indicators of success, the orientation of the evaluation, and at least one specific example of how the evaluation would be conducted in order to gather the necessary data. Group members should visit the wiki, review the plan, and use the wiki or other communication means to resolve any disagreements. In addition, the Facilitator should post the URL of this wiki content to the Week 5 area of the Group Project discussion board.

Your Instructor will visit your group’s wiki to ensure that all group members participated in this assignment and to approve your group’s plan for evaluation.
Note: Please see the Group Project 2 area for a table summarizing the curriculum design proposal tasks due this week. Use this table to help budget your time and ensure you complete assignments on time.
Kyong’s suggestion on Developing a Plan for Evaluation
Success is indicated by the volunteers learning new skills and knowledge from the training program and applying them to their tasks. Another indicator is that the training program has successfully increased the number of patrons and the funds raised for the museum. These indicators of success are congruent with the client’s original expectations and the task requested; the instructor and client can therefore assess them against the instructional materials and activities.
I suggest a goal-based orientation, which is the objective evaluation approach in the summative procedure. This orientation makes the training procedure replicable for future trainees.
Causation should also be considered in the evaluation design, so that changes in learner performance and attitude driven by factors unrelated to the training program, such as aging or personal relationships, are not attributed to the instruction.
Trevor's suggestion for developing a plan for evaluation
For our summative evaluation, I propose that the indicators of success be measured in terms of customer complaints (or a reduction thereof), errors (or the lack thereof) in the Excel system, and the overall quality of tour narration. This last indicator is rather subjective, but could be facilitated by the ranking/grading system proposed earlier as an assessment in training. Experienced employees or managers of the museum could join customers on tours, and/or customers could be asked to fill out brief surveys in which they share their impression of the tour (post-training).
As this curriculum employs supplantive learning strategies and is singularly focused on improving the performance of museum volunteers in their duties at the museum, I propose that the evaluation be goal-oriented. A goal-oriented approach will make target goals, and the success of the new training program, easier to identify. It will also make failure to meet goals easier to identify, thus facilitating future improvements to the training program.
Evaluation Proposal (Ann):
For our formative evaluation, we have completed design reviews and expert reviews as we have developed our training design throughout this course.
For our summative evaluation, I propose that we design a goal-based evaluation where we evaluate the “payoff outcomes” of museum volunteer performance against the requirements established by the museum directors (i.e., as a result of our instruction, are the volunteers able to perform the tasks we were hired to train for?). To do this, I suggest we conduct a customer satisfaction survey for museum service around leading tours and staffing the information station, where there is an established level of satisfaction that must be met (e.g., 90%) (Level 4). I suggest that this survey be conducted six months after instruction has been completed, and that we work with the museum directors to establish the questions that should be asked on the survey, which identify our “indicators of success” (Ragan & Smith, 2005, p. 345). A checklist could also be developed and periodic observations could be conducted to see how well volunteers perform against the established criteria to determine the consistency of volunteer performance (could be designed for Level 2 and Level 3 measures). We could also measure whether there has been an X% increase in funds raised for the museum (Level 4).
Reference: Ragan, T. J., & Smith, P. L. (2005). Instructional design (3rd ed.). Hoboken, NJ: John Wiley & Sons.
Suzanne's suggestion for evaluation
Before implementing this training with new volunteers, I would suggest a formative evaluation by at least one seasoned volunteer and the museum staff member who coordinates the volunteers. These evaluators should check that the information is accurate and current, and ensure that the learning goals and objectives fully cover the skills volunteers need. After making the necessary changes, I would implement the course and then wait until the second group of volunteers completes the training before moving on to a summative evaluation.
For the summative evaluation I would assess the effectiveness of the training by speaking with volunteers who recently completed the training, using surveys as suggested by Trevor, as well as feedback by seasoned volunteers and staff.
Ongoing evaluation will be necessary as software and museum exhibits change.
Suggestions for Evaluation (Lita)
Prior to training, I would suggest a formative evaluation by the director and the identified trainers to find weaknesses in the instruction, so that revisions can be made that make it better organized and more effective for training the volunteers and achieving the learning outcomes.
Summative evaluation observes the effectiveness of the overall process or training that is carried out. It helps the company determine whether the purpose of the training was fulfilled. For the summative evaluation, I propose evaluating whether each volunteer understands how to begin, what to do next, and how to proceed. I propose that they be teamed with an experienced volunteer and that a checklist be used through all phases of training to gauge their knowledge. Volunteers who have successfully completed training can be given a questionnaire to offer feedback on their experience for future training purposes.
Final Evaluation Proposals:
Final Proposal (Ann):
Based on proposals received from all group members, I propose the following, which includes a combination of everyone’s ideas:
Formative Evaluation: Prior to training, I would suggest that design reviews be completed at each development stage. In addition, I suggest a formative “expert review” evaluation session be conducted with museum directors, one seasoned volunteer, and the identified trainers. These evaluators will review the instructional materials for accurate and current information, and ensure that the learning goals and objectives are in line with the instructional expectations.
After making the necessary changes, we would implement the course and then wait until the second group of volunteers completes the training before moving on to a summative evaluation.
Summative evaluation observes the effectiveness of the overall process or training that is carried out. It helps the company determine whether the purpose of the training was fulfilled. I suggest a goal-based orientation, which is the objective evaluation approach in the summative procedure; this orientation makes the training procedure replicable for future trainees. For our summative evaluation, I propose that the indicators of success be measured through feedback received from museum customers, employees, and training attendees, such as:
We conduct a customer satisfaction survey for museum service around leading tours and staffing the information station, where there is an established level of satisfaction that must be met (e.g., 90%) (Level 4). Customer complaints (or a reduction of complaints) could also be monitored.
A checklist could also be developed, and periodic observations (post-training) could be conducted by museum directors or experienced staff, who accompany customers on the tours, to see how well volunteers perform against the established criteria and to determine the consistency of volunteer performance. This could include such items as a volunteer’s ability to apply what was learned (i.e., how to begin, what to do next, and how to proceed).
We could also measure whether there has been an X% increase in funds raised for the museum, or a reduction in the number of errors in the Excel database.
I suggest that this survey be conducted six months after instruction has been completed, and that we work with the museum directors to establish the questions that should be asked on the survey, which identify our “indicators of success” (Ragan & Smith, 2005, p. 345).
Final Evaluation Plan:
Our Evaluation Plan includes both formative and summative evaluation processes.
Formative Evaluation focuses on the instructional design process. It is a means of evaluating the strengths and weaknesses of an instructional program as it is being developed. It offers the opportunity for any required revisions prior to the program rollout.
Our Formative Evaluation approach includes design reviews at each development stage to ensure key instructional requirements are being met.
In addition, an expert review evaluation session will be conducted with museum directors, one seasoned volunteer, and the identified trainers, who will review the instructional materials for accurate and current information and determine whether any weaknesses exist in the proposed instruction. Any identified revisions will be made, ensuring a well-organized and effective training for the volunteers and that the expected learning outcomes are achieved.
Once revisions are complete, we will implement the course and then wait until the second group of volunteers completes the training before moving on to a summative evaluation.
Summative Evaluation focuses on the effectiveness or outcome of the instruction, once completed. It helps to determine if the purpose of the training was fulfilled.
(Note from Ann here: I suggest we use this paragraph as part of our audio for the final evaluation slide – what do others think?) As this curriculum employs supplantive learning strategies and is singularly focused on improving the performance of museum volunteers in their duties at the museum, we propose that the evaluation be goal-oriented. A goal-oriented approach will make target goals, and the success of our instructional program, easier to identify. It will also make failure to meet goals easier to identify, thus facilitating future improvements to the training program.
Our Summative Evaluation approach proposes that the indicators of success be measured through feedback received from museum customers, employees, and training attendees.
We will conduct a customer satisfaction survey for museum service around leading tours and staffing the information station, where there is an established level of satisfaction that must be met (we recommend 90%). Customer complaints will also be monitored to determine whether there is a reduction.
A checklist will be developed, and periodic observations (post-training) will be conducted by museum directors or experienced staff, who accompany customers on the tours, to see how well volunteers perform against the established criteria and to determine the consistency of volunteer performance. This will include such items as a volunteer’s ability to apply what was learned (i.e., how to begin, what to do next, and how to proceed).
We will also measure whether there has been an X% increase in funds raised for the museum, and track the number of errors in the Excel database, to determine instructional outcomes.
We suggest this survey be conducted six months after instruction has been completed, and that we work with the museum directors to establish what questions should be asked on the survey, to identify the success of the instruction.
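The success measures above reduce to simple calculations once the data is collected. The following is a minimal sketch of how the threshold and change figures might be computed; the 1–5 rating scale, the cutoff for “satisfied,” and all numbers are hypothetical assumptions for illustration, not part of the plan.

```python
# Minimal sketch of the summative measures described above.
# Assumptions (not specified in the plan): survey responses use a 1-5
# scale, ratings of 4 or 5 count as "satisfied", and all figures below
# are hypothetical.

SATISFACTION_THRESHOLD = 0.90  # the 90% level recommended in the plan


def satisfaction_rate(responses):
    """Fraction of survey responses that count as satisfied (4 or 5)."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if r >= 4) / len(responses)


def percent_change(before, after):
    """Signed percentage change from a pre-training to a post-training figure."""
    return (after - before) / before * 100


# Hypothetical six-month results:
survey = [5] * 20 + [4] * 8 + [3, 2]        # 30 respondents, 28 satisfied
funds_before, funds_after = 50_000, 57_500  # funds raised for the museum
errors_before, errors_after = 40, 12        # errors in the Excel database

print(f"Satisfaction: {satisfaction_rate(survey):.0%} "
      f"(threshold met: {satisfaction_rate(survey) >= SATISFACTION_THRESHOLD})")
print(f"Funds raised: {percent_change(funds_before, funds_after):+.1f}%")
print(f"Excel errors: {percent_change(errors_before, errors_after):+.1f}%")
```

With these hypothetical figures, 28 of 30 respondents (about 93%) are satisfied, so the 90% threshold is met; funds are up 15% and Excel errors are down 70%. The same functions could be reused at each six-month evaluation cycle.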